
fix: pass SCALER_BACKEND=keda to OpenShift E2E test step #979

Open
clubanderson wants to merge 1 commit into main from fix/978-flaky-lws-scaler-backend

Conversation

@clubanderson
Contributor

Summary

  • The CI deploys infrastructure with SCALER_BACKEND: keda (line 697), but the "Run OpenShift E2E tests" step did not pass this env var to the tests
  • Without it, the test defaults to prometheus-adapter and queries the external metrics API directly — KEDA serves that API and returns 500 for unknown metrics instead of 404
  • The test only tolerates NotFound (404), so it fails with InternalError (500)
  • With SCALER_BACKEND=keda set, the test takes the KEDA-aware branch and checks for ScaledObject resources instead
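The branch selection described above can be sketched as follows (a minimal illustration; only the `SCALER_BACKEND` variable comes from the workflow, while the function name and messages are hypothetical, not the actual test code):

```shell
# Sketch of the branch the E2E test takes based on SCALER_BACKEND.
check_backend() {
  # The test defaults to prometheus-adapter when the env var is unset.
  backend="${1:-prometheus-adapter}"
  if [ "$backend" = "keda" ]; then
    # KEDA-aware branch: assert ScaledObject resources exist instead of
    # querying the external metrics API (which KEDA answers with 500
    # rather than 404 for unknown metrics).
    echo "checking ScaledObject resources"
  else
    echo "querying external metrics API (tolerates only 404)"
  fi
}

check_backend "${SCALER_BACKEND:-}"
```

Without the env var passed into the step, the second branch runs and hits KEDA's 500 responses, which is exactly the flake described in #978.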

Fixes #978

Test plan

  • OpenShift E2E CI passes (the three LWS specs should no longer 500)
  • Verify SCALER_BACKEND: keda appears in the CI log output

The CI deploys infrastructure with SCALER_BACKEND=keda but the test
step did not pass this env var. The test defaults to prometheus-adapter,
causing it to query the external metrics API directly — KEDA returns
500 (not 404) for unknown metrics, which the test treats as a failure.

With SCALER_BACKEND=keda set, the test correctly checks for ScaledObject
resources instead of querying the external metrics API.

Fixes #978

Signed-off-by: Andrew Anderson <andy@clubanderson.com>
Copilot AI review requested due to automatic review settings April 3, 2026 17:11
Contributor

Copilot AI left a comment

Pull request overview

Updates the OpenShift E2E GitHub Actions workflow to ensure the E2E test run uses the same scaler backend (keda) as the infrastructure deployed earlier in the job, preventing tests from taking the wrong (prometheus-adapter) path on OpenShift.

Changes:

  • Pass SCALER_BACKEND: keda into the “Run OpenShift E2E tests” step environment.
  • Print SCALER_BACKEND in the step’s configuration log output for easier debugging.

Comment on lines +1049 to +1050

```yaml
# Must match the scaler backend used during infrastructure deployment
SCALER_BACKEND: keda
```

Copilot AI Apr 3, 2026

SCALER_BACKEND is now hardcoded in multiple steps (infra deploy, Model B deploy, and this test step). To avoid future drift (e.g., changing the deploy backend but forgetting to update tests), consider defining SCALER_BACKEND: keda once at the job/workflow env: level and referencing it from each step, or reusing an existing ${{ env.SCALER_BACKEND }} value if you add one earlier.
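
Copilot's suggestion could look roughly like the following sketch (job and step names here are assumed, not taken from the actual workflow; only the `SCALER_BACKEND` variable is from this PR):

```yaml
# Hypothetical sketch: define the backend once at the job level so the
# deploy and test steps cannot drift apart.
jobs:
  e2e-openshift:
    env:
      SCALER_BACKEND: keda          # single source of truth for this job
    steps:
      - name: Deploy infrastructure
        run: ./deploy-infra.sh      # reads $SCALER_BACKEND (script name assumed)
      - name: Run OpenShift E2E tests
        run: make test-e2e          # inherits SCALER_BACKEND from the job env
```

Steps automatically inherit job-level `env:` values in GitHub Actions, so the per-step `SCALER_BACKEND` entries could be dropped entirely.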

@github-actions
Contributor

github-actions bot commented Apr 3, 2026

GPU Pre-flight Check ✅

GPUs are available for e2e-openshift tests. Proceeding with deployment.

| Resource | Total | Allocated | Available |
| -------- | ----- | --------- | --------- |
| GPUs     | 50    | 32        | 18        |

| Cluster       | Value                      |
| ------------- | -------------------------- |
| Nodes         | 16 (7 with GPUs)           |
| Total CPU     | 993 cores                  |
| Total Memory  | 10383 Gi                   |
| GPUs required | 4 (min) / 6 (recommended)  |

Collaborator

@mamy-CS mamy-CS left a comment

/lgtm. It may be worth following the Copilot review's suggestion to set SCALER_BACKEND: keda once at the workflow level.

@mamy-CS
Collaborator

mamy-CS commented Apr 3, 2026

@clubanderson
[FAIL] Smoke Tests - Infrastructure Readiness Basic infrastructure validation when using KEDA as scaler backend [It] should have KEDA operator ready [smoke, full]


Development

Successfully merging this pull request may close these issues.

e2e flaky lws tests

3 participants